When AI Eats the Bottom Rung of the Career Ladder

Ben Lorica and Evangelos Simoudis on AI’s Talent Crisis, Hardware Depreciation, and OpenAI’s Router Strategy.

Subscribe: Apple • Spotify • Overcast • Pocket Casts • AntennaPod • Podcast Addict • Amazon • RSS.

 

Ben Lorica and Evangelos Simoudis discuss three critical trends shaping the AI industry: the “Great Hollowing Out,” in which AI automation eliminates entry-level positions and disrupts talent pipelines; the financial implications of rapid AI hardware depreciation versus traditional accounting practices; and OpenAI’s shift to router-based model selection as foundation model providers search for sustainable business models.

Subscribe to the Gradient Flow Newsletter

Transcript

Below is a heavily edited excerpt, in Question & Answer format.

Part I: The Great AI “Hollowing Out” – When Entry-Level Jobs Disappear

What exactly is the “AI Hollowing Out” phenomenon, and why should companies building AI solutions care about it?

The AI Hollowing Out refers to the elimination or significant reduction of entry-level positions as companies increasingly automate junior-level tasks. This creates a critical talent pipeline problem: without entry-level positions, organizations lose their primary source for developing senior talent. For AI teams, this means the pool of experienced practitioners you’ll be able to hire in 5-10 years is actively shrinking today. The phenomenon started becoming visible around 2019 but has accelerated dramatically with generative AI adoption.

What concrete data supports this trend beyond anecdotal evidence?

According to New York Fed data from March 2025, recent college graduate unemployment sits at 5.8%, up from 5.3-5.4% in previous months. More concerning is underemployment: 41% of recent graduates and 33% of all college graduates are underemployed. This means they’re working in positions that don’t require their degree or skill level. For tech companies, this translates to a massive waste of potential talent that could be developing AI expertise.

Which specific job categories are being impacted, and how does this affect AI development teams?

Three tiers of impact exist:

  • Entry-level low-skill white collar jobs (email responses, basic scripting) are already being replaced
  • Entry-level high-skill positions (paralegal, customer support, certain programming tasks) are being partially automated but cannot be fully replaced with current technology
  • High-skill white collar work (lawyers, senior engineers) remains largely unaffected but sees significant productivity gains

For AI teams, this means junior developer positions and data annotation roles are disappearing, making it harder to build diverse, multi-level teams.

How does losing “grunt work” positions actually hurt organizations in the long term?

These seemingly mundane tasks serve as critical learning experiences where junior employees understand the business, build domain expertise, and learn from senior staff through socialization. In AI development, tasks like data cleaning, basic model testing, and documentation review aren’t just busy work—they’re how junior team members learn the nuances of production AI systems. Without these experiences, the next generation lacks the foundational understanding needed to become experts.

What’s this “AI doom loop” in hiring, and how does it affect talent acquisition?

The doom loop works like this: job seekers use AI to generate resumes and applications, companies use AI to screen these applications, fewer humans get through the process, and entry-level positions become even harder to fill. For AI teams trying to hire, this means potentially missing talented candidates whose AI-generated applications get filtered out by AI screening tools—an ironic but serious problem that requires rethinking recruitment strategies.

Are new AI-related jobs being created to offset these losses?

Yes, but with critical caveats. Manufacturing has 400,000 open positions requiring knowledge of intelligently automated systems. Autonomous vehicle companies need teleoperators instead of drivers. However, these new positions require specialized skills and exist in much smaller numbers. For example, Uber employs millions of gig drivers, but the autonomous vehicle industry will need far fewer teleoperators. AI teams should focus on creating clear pathways for existing employees to gain these specialized skills rather than assuming the market will provide trained talent.

What should organizations building AI solutions do to address this hollowing out?

Companies need to reimagine apprenticeship programs and redefine entry-level positions for the AI age. This might mean creating hybrid roles where juniors work alongside AI tools rather than being replaced by them. Research shows that mixed-experience teams outperform homogeneous expert groups, so maintaining a talent pipeline isn’t just about future-proofing—it directly impacts current team performance. Consider creating “AI-augmented junior” positions where new graduates learn to leverage AI tools while still gaining essential hands-on experience.

Part II: The Hidden Hardware Crisis – When AI Assets Depreciate Faster Than Balance Sheets

What’s the controversy around AI hardware depreciation that Jim Chanos highlighted?

Chanos observed that Meta depreciates AI chips over 11-12 years in their financial statements, but the actual useful life based on resale value might be only 2-3 years. This gap between accounting treatment and economic reality could mean companies are significantly overstating their asset values. For teams building AI solutions, this impacts decisions about whether to own hardware versus using cloud services.
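
To see why that gap matters, here is a rough back-of-the-envelope sketch. The $10B fleet cost is a made-up figure for illustration; only the useful-life assumptions (roughly 12 years on the books versus 2-3 years of economic life) come from the discussion above.

```python
# Rough, illustrative sketch of how the assumed useful life changes reported costs.
# The $10B fleet cost is hypothetical; only the useful-life assumptions
# (about 12 years on the books vs. ~3 years economically) come from the discussion.

def straight_line_annual_expense(capex: float, useful_life_years: float) -> float:
    """Annual depreciation expense under simple straight-line accounting."""
    return capex / useful_life_years

fleet_capex = 10_000_000_000  # hypothetical GPU fleet cost, in dollars

book_expense = straight_line_annual_expense(fleet_capex, useful_life_years=12)
economic_expense = straight_line_annual_expense(fleet_capex, useful_life_years=3)

print(f"Annual expense at a 12-year book life:   ${book_expense:,.0f}")
print(f"Annual expense at a 3-year useful life:  ${economic_expense:,.0f}")
print(f"Understatement factor: {economic_expense / book_expense:.1f}x")
```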

Why does hardware depreciation matter for teams building AI applications?

Chips represent 60-70% or more of data center capital expenditure. If these assets lose value in 2-3 years rather than 11-12 years, the true cost of running AI infrastructure is much higher than reported. This affects build-versus-buy decisions, cloud versus on-premise strategies, and the real ROI calculations for AI projects. Teams need to factor in rapid obsolescence when planning infrastructure investments.

Can’t companies just keep using older chips for less demanding tasks like inference?

This is the counter-argument to rapid depreciation: older chips can handle inference workloads even if they’re not cutting-edge for training. However, the key question becomes how much of your infrastructure budget goes toward replacing obsolete hardware versus adding new capacity for growth. Organizations need to segment their workloads carefully, using the latest hardware only where it is truly necessary and relegating older chips to less demanding tasks.
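
As a rough illustration of that segmentation, the policy can be as simple as an explicit workload-to-hardware mapping that defaults new workloads to older capacity. The tier names and workload categories below are hypothetical.

```python
# Illustrative sketch of workload-to-hardware segmentation. Tier names and
# workload categories are invented; the point is to make "latest hardware only
# where necessary" an explicit, reviewable policy rather than an ad hoc choice.

HARDWARE_POLICY = {
    "frontier-model-training": "latest-generation GPUs",
    "fine-tuning":             "previous-generation GPUs",
    "batch-inference":         "previous-generation GPUs",
    "low-latency-inference":   "previous-generation GPUs",
    "embeddings-and-etl":      "oldest GPUs or CPUs",
}

def assign_hardware(workload: str) -> str:
    """Default unknown workloads to older hardware unless the policy says otherwise."""
    return HARDWARE_POLICY.get(workload, "oldest GPUs or CPUs")

for job in ("frontier-model-training", "batch-inference", "ad-hoc-analytics"):
    print(f"{job:28s} -> {assign_hardware(job)}")
```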

What does the DeepSeek experience teach us about hardware requirements?

DeepSeek demonstrated that you don’t always need the latest and greatest hardware to provide value. This suggests organizations should create clear gradations of where cutting-edge hardware is essential versus where existing capacity suffices. For AI teams, this means carefully evaluating whether each use case truly requires the newest GPUs or if older hardware can deliver acceptable performance at lower cost.

How should AI teams factor hardware depreciation into their planning?

Teams should watch for signs in financial statements about hardware write-offs and replacements, not just the stated depreciation schedule. In 2-3 years, we’ll see whether capital budgets go primarily toward replacing existing chips or adding new capacity—this will reveal the true depreciation rate. For now, assume faster depreciation than official statements suggest when calculating long-term infrastructure costs.

Part III: The Router Revolution – How Model Access Is Being Transformed

What changed with ChatGPT’s new router-based interface, and why does it matter?

OpenAI replaced granular model selection with three router options that automatically direct queries to different models based on factors such as each query’s complexity and likely value. This shift from user-controlled to platform-controlled model selection represents a fundamental change in how we access AI capabilities. For teams building applications, this means less control over which model handles a given request but potentially more optimized cost-performance ratios.

What’s driving this shift toward routing models?

Foundation model providers are searching for sustainable business models. They’re losing money on most subscribers even with paid tiers. Routing allows them to optimize costs by directing simpler queries to smaller models while reserving expensive compute for complex tasks. This mirrors Google’s early 2000s transition from a search utility to an advertising business model—expect similar experimentation with monetization strategies.

How does routing affect the actual AI capabilities available to users and developers?

Routing systems evaluate each query’s value and assign appropriate processing power. A restaurant recommendation gets different treatment than a medical referral request. For development teams, this means less predictable model behavior and potential variability in response quality. You’ll need to design applications that can handle this variability gracefully.
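
OpenAI has not published how its router works, but as a mental model, a minimal sketch of complexity-based routing might look like the following. The model names, scoring heuristic, and thresholds are invented for illustration; a production router would more likely use a learned classifier plus cost and latency budgets.

```python
# Minimal illustrative sketch of router-based model selection.
# Model names, the scoring heuristic, and thresholds are invented for illustration.

def estimate_complexity(query: str) -> float:
    """Toy heuristic: longer queries with reasoning-like keywords score higher."""
    keywords = ("prove", "diagnose", "step by step", "analyze", "compare")
    score = min(len(query) / 500, 1.0)
    score += 0.3 * sum(kw in query.lower() for kw in keywords)
    return min(score, 1.0)

def route(query: str) -> str:
    """Send cheap queries to a small model, hard ones to a large reasoning model."""
    score = estimate_complexity(query)
    if score < 0.2:
        return "small-fast-model"       # e.g. a restaurant recommendation
    elif score < 0.6:
        return "mid-tier-model"
    return "large-reasoning-model"      # e.g. a medical-referral-style question

print(route("Any good tacos near downtown Oakland?"))
print(route("Compare the treatment options and analyze step by step which referral fits."))
```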

What are the implications for API users and application developers?

Even paid subscribers face usage limits when they exceed certain thresholds, indicating providers are actively managing resource allocation. For production applications, this means potentially unpredictable rate limits and quality variations. Teams should build redundancy into their systems and not rely on single provider APIs for critical functions.
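
One way to build in that redundancy is a simple provider fallback chain. The sketch below uses placeholder client functions rather than any real provider SDK; the point is the control flow: try the primary provider, and fall back when you hit rate limits or errors.

```python
# Illustrative provider-fallback sketch. The call_* functions are placeholders
# for whatever client libraries you actually use.

from typing import Callable

class ProviderError(Exception):
    """Raised by a provider call on rate limiting or other failures."""

def call_primary(prompt: str) -> str:
    raise ProviderError("429: rate limited")  # simulate hitting a usage cap

def call_secondary(prompt: str) -> str:
    return f"[secondary provider] answer to: {prompt}"

def complete_with_fallback(prompt: str, providers: list[Callable[[str], str]]) -> str:
    last_error = None
    for provider in providers:
        try:
            return provider(prompt)
        except ProviderError as err:
            last_error = err  # record the failure and try the next provider
    raise RuntimeError(f"All providers failed; last error: {last_error}")

print(complete_with_fallback("Summarize today's incident report.", [call_primary, call_secondary]))
```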

Will other providers follow OpenAI’s routing approach?

The prediction is yes—Google, Anthropic, and others will likely adopt similar strategies. They face the same fundamental challenge of monetizing expensive infrastructure. For AI teams, this means preparing for a future where direct model access becomes rare and routing layers become standard across all major providers.